Enhancing Robustness
Enhancing Robustness of Last Layer Two-Stage Fair Model Corrections
Last-layer retraining methods have emerged as an efficient framework for correcting existing base models. Within this framework, several methods have been proposed to correct models for subgroup fairness, both with and without group membership information. Importantly, prior work has demonstrated that many of these methods are susceptible to noisy labels. To this end, we propose a drop-in correction for label noise in last-layer retraining and demonstrate that it achieves state-of-the-art worst-group accuracy for a broad range of symmetric label noise and across a wide variety of datasets exhibiting spurious correlations. Our proposed approach uses label spreading on a latent nearest neighbors graph and has minimal computational overhead compared to existing methods.
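A minimal sketch of the label-spreading step this abstract describes, using scikit-learn's LabelSpreading over a kNN graph built in the base model's latent space; the embedding source, `n_neighbors`, and `alpha` values are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: correct noisy labels by spreading them over a latent kNN graph.
# Assumes `embeddings` are penultimate-layer features of a frozen base model.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def denoise_labels(embeddings: np.ndarray, noisy_labels: np.ndarray,
                   n_neighbors: int = 20, alpha: float = 0.8) -> np.ndarray:
    """Spread labels over a kNN graph in latent space; alpha < 1 retains a
    fraction of each point's original label, letting neighbors outvote
    noisy annotations."""
    spreader = LabelSpreading(kernel="knn", n_neighbors=n_neighbors, alpha=alpha)
    spreader.fit(embeddings, noisy_labels)
    return spreader.transduction_  # cleaned labels for last-layer retraining
```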
Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach
Deep reinforcement learning agents achieve state-of-the-art performance in a wide range of simulated control tasks. However, successful applications to real-world problems remain limited. One reason for this gap is that the learnt policies are not robust to observation noise or adversarial attacks. In this paper, we investigate the robustness of deep RL policies to a single small state perturbation in deterministic continuous control tasks. We demonstrate that RL policies can be deterministically chaotic, as small perturbations to the system state have a large impact on subsequent state and reward trajectories.
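A hedged sketch of the kind of diagnostic this abstract implies: estimating the maximal Lyapunov exponent of the closed-loop system by tracking the divergence of paired rollouts with Benettin-style renormalization. The `env.clone()`, `set_state()`, `get_state()`, and deterministic `policy(state) -> action` interfaces are assumed, not part of any specific library.

```python
# Sketch: a positive estimate indicates chaotic sensitivity of the policy.
import numpy as np

def maximal_lyapunov(env, policy, state, eps=1e-6, horizon=200):
    env_a, env_b = env.clone(), env.clone()          # paired environments
    env_a.set_state(state)
    env_b.set_state(state + eps * np.random.randn(*state.shape))
    s_a, s_b = state, env_b.get_state()
    log_growth = 0.0
    for _ in range(horizon):
        s_a, _, _, _ = env_a.step(policy(s_a))
        s_b, _, _, _ = env_b.step(policy(s_b))
        d = np.linalg.norm(s_a - s_b)                # trajectory separation
        log_growth += np.log(d / eps)
        s_b = s_a + (eps / d) * (s_b - s_a)          # renormalize back to eps
        env_b.set_state(s_b)
    return log_growth / horizon                      # > 0 signals chaos
```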
Enhancing Robustness of Graph Neural Networks on Social Media with Explainable Inverse Reinforcement Learning
Adversarial attacks against graph neural networks (GNNs) through perturbations of the graph structure are increasingly common in social network tasks like rumor detection. Social media platforms capture diverse attack sequence samples through both machine and manual screening processes. Investigating effective ways to leverage these adversarial samples to enhance robustness is imperative. We improve the maximum entropy inverse reinforcement learning (IRL) method with the mixture-of-experts approach to address multi-source graph adversarial attacks. We develop precise sample guidance and a bidirectional update mechanism to reduce the deviation caused by imprecise feature representation and negative sampling within the large action space of social graphs, while also accelerating policy learning.
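A minimal PyTorch sketch of the mixture-of-experts reward model this abstract pairs with maximum-entropy IRL: each expert can specialize on one attack source, and a gate weights them per state-action feature. The layer sizes and two-layer experts are illustrative assumptions.

```python
# Sketch: gated mixture of expert reward heads for multi-source attacks.
import torch
import torch.nn as nn

class MoEReward(nn.Module):
    def __init__(self, feat_dim: int, n_experts: int, hidden: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_experts))
        self.gate = nn.Linear(feat_dim, n_experts)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.gate(feats), dim=-1)              # (B, E)
        r = torch.cat([e(feats) for e in self.experts], dim=-1)  # (B, E)
        return (w * r).sum(dim=-1)  # per-sample scalar reward
```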
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks, and several server-side defenses have been proposed in response. However, we empirically show that under extremely strong attacks, these defensive methods fail to guarantee the robustness of FL. More importantly, we observe that once the global model is polluted, the impact of the attack remains in subsequent rounds even if there are no further attacks. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model.
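A hedged sketch of a client-side mitigation in the spirit of FL-WBC: during local training, the client perturbs parameter coordinates where a poisoned global model's effect may linger. The selection rule used here (coordinates with the smallest local change) and the noise scale are illustrative stand-ins, not the paper's exact criterion.

```python
# Sketch: add small noise to the least-updated parameter coordinates,
# where residual attack effects are assumed (for illustration) to persist.
import torch

def perturb_persistent_coords(model, prev_params, noise_std=1e-3, frac=0.1):
    for p, p_prev in zip(model.parameters(), prev_params):
        delta = (p.data - p_prev).abs()
        k = max(1, int(frac * delta.numel()))
        # smallest local changes -> candidate coordinates to perturb
        idx = torch.topk(delta.flatten(), k, largest=False).indices
        flat = p.data.flatten()
        flat[idx] += noise_std * torch.randn(k, device=flat.device)
        p.data = flat.view_as(p.data)
```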
Enhancing Robustness of Graph Neural Networks through p-Laplacian
Sirohi, Anuj Kumar, Halder, Subhanu, Kumar, Kabir, Kumar, Sandeep
With the growth of data in day-to-day life, businesses and other stakeholders need to analyze it for better predictions. Traditionally, relational data has been a source of various insights, but with the increase in computational power and the need to understand deeper relationships between entities, the need for new techniques has arisen. To this end, graph data analysis has become an extraordinary tool for understanding data, enabling more realistic and flexible modelling of complex relationships. Recently, Graph Neural Networks (GNNs) have shown great promise in applications such as social network analysis, recommendation systems, and drug discovery. However, the data can be adversarially manipulated, whether during training (poisoning attacks) or during testing (evasion attacks), which can adversely affect the desired outcome of the GNN model. It is therefore crucial to make GNNs robust to such attacks. Existing robustness methods are computationally demanding and perform poorly as the intensity of the attack increases. This paper presents a computationally efficient framework, pLapGNN, based on the weighted p-Laplacian, for making GNNs robust. Empirical evaluation on real datasets establishes the efficacy and efficiency of the proposed method.
- North America > United States > New York > New York County > New York City (0.05)
- Asia > India > NCT > Delhi (0.04)
- Information Technology > Security & Privacy (0.37)
- Government > Military (0.37)
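The pLapGNN entry above builds on the weighted p-Laplacian; below is a minimal sketch of the corresponding graph smoothness penalty, which can be added to a GNN's training loss to encourage predictions that vary slowly across reliable edges. The sparse edge format and the default p = 1.5 are illustrative assumptions.

```python
# Sketch: weighted p-Laplacian smoothness penalty sum_ij w_ij * ||x_i - x_j||^p.
import torch

def p_laplacian_penalty(x: torch.Tensor, edge_index: torch.Tensor,
                        edge_weight: torch.Tensor, p: float = 1.5) -> torch.Tensor:
    src, dst = edge_index             # shape (2, E): edges src -> dst
    diff = x[src] - x[dst]            # feature differences along edges
    return (edge_weight * diff.norm(dim=-1).pow(p)).sum()
```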
Enhancing Robustness of Human Detection Algorithms in Maritime SAR through Augmented Aerial Images to Simulate Weather Conditions
Tjia, Miguel, Kim, Artem, Wijaya, Elaine Wynette, Tefara, Hanna, Zhu, Kevin
Using YOLO, we trained on an augmented dataset covering different weather and lighting conditions. YOLO uses CNNs to apply a series of convolution and pooling layers to the input image, where the convolution layers extract the image's main features [2]. Through this, our YOLO model learns to differentiate objects, which may considerably improve its detection accuracy and, in turn, the efficiency of SAR operations. This paper aims to improve human-detection accuracy in maritime SAR by evaluating a robust dataset containing various elevations and geographical locations, and by applying data augmentation that simulates different weather and lighting conditions. We observed that models trained on augmented datasets outperformed their non-augmented counterparts: human recall scores ranged from 0.891 to 0.911, with an improvement rate of 3.4% on the YOLOv5l model. Results show that these models are more robust to real-world conditions with varying weather, brightness, tint, and contrast.
- Asia > Indonesia > Java > Jakarta > Jakarta (0.06)
- Asia > South Korea > Busan > Busan (0.05)
- North America > United States > Indiana (0.04)
- (2 more...)
- Transportation > Marine (0.47)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.31)
- Government > Military > Coast Guard (0.31)
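The augmentation the SAR entry above describes can be sketched with the albumentations library; the specific transforms and probabilities below are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: simulate weather/lighting variation while keeping YOLO-format boxes.
import albumentations as A

augment = A.Compose(
    [
        A.RandomBrightnessContrast(p=0.5),  # varying brightness/contrast
        A.HueSaturationValue(p=0.5),        # tint shifts
        A.RandomFog(p=0.3),                 # simulated haze
        A.RandomRain(p=0.3),                # simulated rain streaks
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# usage: out = augment(image=image, bboxes=bboxes, class_labels=labels)
```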
Enhancing Robustness in Large Language Models: Prompting for Mitigating the Impact of Irrelevant Information
Jiang, Ming, Huang, Tingting, Guo, Biao, Lu, Yao, Zhang, Feng
In recent years, large language models (LLMs) have garnered significant attention due to their superior performance in complex reasoning tasks. However, recent studies show that their reasoning capabilities diminish markedly when problem descriptions contain irrelevant information, even with the use of advanced prompting techniques. To investigate this issue further, a dataset of primary-school mathematics problems containing irrelevant information, named GSMIR, was constructed. Testing prominent LLMs and prompting techniques on this dataset revealed that while LLMs can identify irrelevant information, they do not effectively mitigate the interference it causes once identified. To address this shortcoming, a novel two-step prompting method, ATF, is proposed, which enhances the ability of LLMs to identify and self-mitigate the influence of irrelevant information: first, the irrelevant information is analyzed; then it is filtered out. Experimental results demonstrate that ATF significantly improves the reasoning performance of LLMs and prompting techniques on the GSMIR dataset, even in the presence of irrelevant information.
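A hedged sketch of the analyze-then-filter flow described above. `call_llm` is a placeholder for any chat-completion API, and the prompt wording is an illustrative assumption, not the paper's exact prompts.

```python
# Sketch: two-step prompting -- analyze irrelevant info, then filter it out.
def analyze_then_filter(call_llm, problem: str) -> str:
    analysis = call_llm(
        "Identify any sentences in the following math problem that are "
        f"irrelevant to answering the question:\n{problem}")
    filtered = call_llm(
        "Rewrite the problem with the irrelevant sentences removed.\n"
        f"Problem:\n{problem}\nIrrelevant information:\n{analysis}")
    return call_llm(f"Solve step by step:\n{filtered}")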
Enhancing robustness of data-driven SHM models: adversarial training with circle loss
Yang, Xiangli, Deng, Xijie, Zhang, Hanwei, Zou, Yang, Yang, Jianxi
Structural health monitoring (SHM) is critical to safeguarding the safety and reliability of aerospace, civil, and mechanical infrastructure. Machine learning-based data-driven approaches have gained popularity in SHM due to advancements in sensors and computational power. However, machine learning models used in SHM are vulnerable to adversarial examples -- even small changes in input can lead to different model outputs. This paper aims to address this problem by discussing adversarial defenses in SHM. In this paper, we propose an adversarial training method for defense, which uses circle loss to optimize the distance between features in training to keep examples away from the decision boundary. Through this simple yet effective constraint, our method demonstrates substantial improvements in model robustness, surpassing existing defense mechanisms.
- Asia > China > Chongqing Province > Chongqing (0.04)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.04)
- (4 more...)
- Information Technology (0.47)
- Energy (0.46)
- Health & Medicine > Consumer Health (0.34)
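The circle loss used for adversarial training in the SHM entry above has a standard published form (Sun et al., 2020); here is a minimal PyTorch sketch for given within-class similarities `sp` and between-class similarities `sn`. The margin m and scale gamma are illustrative defaults.

```python
# Sketch: circle loss pulls sp toward 1 and pushes sn toward 0 with
# self-paced weights, enlarging the margin around the decision boundary.
import torch
import torch.nn.functional as F

def circle_loss(sp: torch.Tensor, sn: torch.Tensor,
                m: float = 0.25, gamma: float = 64.0) -> torch.Tensor:
    ap = torch.clamp_min(1 + m - sp.detach(), 0.0)  # weights for positives
    an = torch.clamp_min(sn.detach() + m, 0.0)      # weights for negatives
    logit_p = -gamma * ap * (sp - (1 - m))
    logit_n = gamma * an * (sn - m)
    return F.softplus(torch.logsumexp(logit_n, dim=0) +
                      torch.logsumexp(logit_p, dim=0))
```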
Enhancing Robustness in Biomedical NLI Models: A Probing Approach for Clinical Trials
Large language models have revolutionized various fields and industries, such as conversational AI, content generation, information retrieval, business intelligence, and medicine, to name a few. One major application in medicine is analyzing and investigating clinical trials for entailment tasks. However, it has been observed that large language models are susceptible to shortcut learning, factual inconsistency, and performance degradation with little variation in context. Adversarial and robustness testing is performed to ensure the integrity of model outputs, but ambiguity still persists. To ensure the integrity of the reasoning performed, and to verify that the model has correct syntactic and semantic understanding, probing is used. Here, I used mnestic probing to investigate the SciFive model trained on clinical trials. I investigated the features the model learnt with respect to natural logic. To this end, I trained task-specific probes, used them to investigate the final layers of the trained model, and then fine-tuned the model using iterative null projection. The results show that model accuracy improved. During experimentation, I observed that the size of the probe affects the fine-tuning process.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
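The "iterative null projection" in the final entry is in the spirit of iterative null-space projection (INLP); below is a simplified sketch for binary probe labels. The rank-1 projection update and the iteration count are illustrative assumptions.

```python
# Sketch: repeatedly fit a linear probe and project its direction out of
# the representations, removing probe-recoverable information.
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X: np.ndarray, y: np.ndarray, n_iters: int = 10) -> np.ndarray:
    P = np.eye(X.shape[1])  # accumulated projection matrix
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(X @ P, y)
        w = probe.coef_ / np.linalg.norm(probe.coef_)  # (1, d) unit direction
        P = P @ (np.eye(X.shape[1]) - w.T @ w)         # null out that direction
    return P  # apply as X @ P before retraining or evaluation
```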